perceptron
Go back to the [[AI Glossary]]
A system (either hardware or software) that takes in one or more input values, runs a function on the weighted sum of the inputs, and computes a single output value. In machine learning, the function is typically nonlinear, such as ReLU, sigmoid, or tanh. For example, the following perceptron relies on the sigmoid function to process three input values:

output = sigmoid(w1·x1 + w2·x2 + w3·x3)
In the following illustration, the perceptron takes three inputs, each of which is itself modified by a weight before entering the perceptron:
```mermaid
stateDiagram
    Input1 --> Perceptron : weight1
    Input2 --> Perceptron : weight2
    Input3 --> Perceptron : weight3
    Perceptron --> [*] : output
```
A perceptron that takes in three inputs, each multiplied by a separate weight, and outputs a single value.
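A minimal sketch of this three-input sigmoid perceptron in Python (the weights, inputs, and bias below are made-up illustrative values, not from the glossary):

```python
import math

def sigmoid(x: float) -> float:
    """Squash the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def perceptron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs, passed through the sigmoid."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(weighted_sum)

# Example: three inputs, each modified by its own weight (illustrative values).
output = perceptron(inputs=[0.5, -1.0, 2.0], weights=[0.8, 0.2, -0.4])
print(output)  # a single value between 0 and 1
```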
Perceptrons are the nodes in deep neural networks. That is, a deep neural network consists of multiple connected perceptrons, plus a backpropagation algorithm to introduce feedback.
Perceptron Learning
Go to [[Week 2 - Introduction]] or back to the [[Main AI Page]]
An extension of the page on [[Neural Networks]]
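As a sketch of what perceptron learning refers to: the classic perceptron learning rule nudges each weight in proportion to the prediction error on one example. A minimal illustrative version (the step activation, learning rate, and values are assumptions, not from these notes):

```python
def perceptron_update(weights, bias, inputs, target, lr=0.1):
    """One step of the classic perceptron learning rule (illustrative)."""
    # Step activation: predict 1 if the weighted sum is positive, else 0.
    prediction = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    error = target - prediction  # -1, 0, or +1
    # Nudge each weight toward reducing the error on this example.
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias

# Example: one update on a single labelled input (made-up values).
w, b = perceptron_update([0.0, 0.0], 0.0, inputs=[1.0, 2.0], target=1)
```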
Perceptrons
Go to [[Week 2 - Introduction]] or back to the [[Main AI Page]]
Part of the page on [[Neural Networks]]
The simplest form of [[Neural Networks]]: a single layer in which input nodes connect directly to an output node.
The input layer forwards its values to the next layer by multiplying each value by a weight and then summing the results.
Full neural networks are vastly more complicated versions of this, with layer upon layer forwarding values through weights, one for each neuron in the next layer. This allows for ever finer tuning as the number of neurons increases. A sketch of this idea follows below.
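To illustrate "one weight for each neuron in the next layer", a fully connected layer can be written as a matrix of weights with one row per neuron in the next layer. A minimal NumPy sketch (shapes and random values are illustrative assumptions):

```python
import numpy as np

def layer_forward(weights, inputs):
    """Forward one layer: each output neuron has its own row of weights."""
    return 1.0 / (1.0 + np.exp(-(weights @ inputs)))  # sigmoid of the weighted sums

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # 3 input values
W1 = rng.normal(size=(4, 3))  # layer of 4 neurons, each with 3 weights
W2 = rng.normal(size=(2, 4))  # next layer: 2 neurons, each with 4 weights
hidden = layer_forward(W1, x)
output = layer_forward(W2, hidden)
print(output)  # two output values
```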